Creation and generation of a three-dimensional environment
Patent abstract:
The present invention concerns the creation and generation of a three-dimensional (3D) environment. In an example, a 3D environment can be created using one or more models, in which two-dimensional (2D) representations of the models can be manipulated using an authoring application. The models can comprise anchor points, which can be used to stitch the models together when rendering the 3D environment. In another example, a model can comprise one or more content points, which can be used to position content within the 3D environment. An environment data file can be generated based on one or more models and content associated with the content points, whereby a file is created that can be distributed to other computing devices. A display screen application can be used to generate the 3D environment based on the environment data file. In this sense, the display screen application can stitch the models together and populate the 3D environment with the content.

Publication number: BR112019019556A2
Application number: R112019019556
Filing date: 2018-04-11
Publication date: 2020-04-22
Inventors: Handa Aniket; G Perez Carlos; Brett Marshall Colton; Anthony Martinez Molina Harold; Srinivasan Vidya
Applicant: Microsoft Technology Licensing Llc
IPC main class:
Patent description:
Descriptive Report of the Invention Patent for CREATION AND GENERATION OF THREE-DIMENSIONAL ENVIRONMENT.

Background

[001] Websites have been a key way to share and consume information on the web. A collection of services exists that democratizes the creation of a website. However, there are no services that solve the problem of creating websites that realize the full potential of three-dimensional (3D) content. With increasing pressure to easily create and share 3D content, there is a need for tools and/or services that facilitate the creation and/or consumption of 3D content.

[002] It is with respect to these and other general considerations that the embodiments are described. In addition, although relatively specific problems are discussed, it should be understood that the embodiments should not be limited to solving the specific problems identified in the background.

Summary

[003] The present invention relates to the creation and generation of a three-dimensional (3D) environment. In one example, a 3D environment can be created using one or more models, in which two-dimensional (2D) representations of the models can be selected and positioned using an authoring application. A model can comprise one or more anchor points, which can be used to stitch the model together with one or more other models when rendering the 3D environment. In another example, a model can comprise one or more content points, which can be used to position content items within the 3D environment. An environment data file can be generated based on one or more models and the content associated with the content points, whereby a file is created that can be distributed to other computing devices (Petition 870190093906, of 9/19/2019).

[004] A display screen application can be used to generate the 3D environment based on the environment data file.
As an example, the display screen application can access models indicated by an environment data file and render the models based on a stitching operation, in order to create a seemingly continuous combination of the models. The display screen application can also populate the 3D environment with content based on the content points of the models. In this way, it may be possible to easily create 3D environments according to aspects disclosed in this document, even if a user has little or no experience with 3D design.

[005] This summary is provided to introduce a selection of concepts, in a simplified form, that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Brief Description of the Drawings

[006] Non-limiting and non-exhaustive examples are described with reference to the following figures.
[007] FIG. 1 illustrates an overview of an example system for creating and generating a three-dimensional environment.
[008] FIG. 2 illustrates an overview of an example method for creating a three-dimensional environment using a two-dimensional representation.
[009] FIG. 3 illustrates an overview of an example method for generating a three-dimensional environment.
[0010] FIG. 4 illustrates an overview of an example user interface for creating a three-dimensional environment using two-dimensional models.
[0011] FIG. 5 illustrates an example view within a three-dimensional environment.
[0012] FIG. 6 is a block diagram illustrating physical components of a computing device with which aspects of the description can be practiced.
[0013] FIGS. 7A and 7B are simplified block diagrams of a mobile computing device with which aspects of the present description can be practiced.
[0014] FIG. 8 is a simplified block diagram of a distributed computing system in which aspects of the present description can be practiced.
[0015] FIG. 9 illustrates a tablet computing device for carrying out one or more aspects of the present description.

Detailed Description

[0016] In the detailed description that follows, references are made to the accompanying drawings that form a part hereof, and in which specific embodiments or examples are shown by way of illustration. These aspects can be combined, other aspects can be used, and structural changes can be made without departing from the present description. Embodiments can be practiced as methods, systems, or devices. Accordingly, embodiments can take the form of a hardware implementation, an entirely software implementation, or an implementation combining hardware and software aspects. The following detailed description, therefore, should not be taken in a limiting sense, and the scope of this description is defined by the appended claims and their equivalents.

[0017] Aspects of the present description relate to the creation and generation of three-dimensional (3D) environments. In one example, a 3D environment can be created using an authoring application, in which a user of the authoring application can graphically select two-dimensional (2D) representations of models, which can be stored as an environment data file. The environment data file can later be used to generate a 3D environment comprising 3D renderings of the selected models. In some examples, different types of content can be embedded or included in the 3D environment. Examples of content include, but are not limited to, 3D objects (e.g., 3D models, figures, shapes, etc.) or 2D objects (e.g., files, images, presentations, documents, web sites, videos, remote resources, etc.), among other content.
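To make the environment data file concept concrete, the following is a minimal sketch of what such a file might contain. The patent does not prescribe a schema, so every field name, identifier, and URI below is illustrative, not taken from the source.

```python
import json

# Hypothetical environment data file: a theme, a list of selected models,
# and content items keyed by the content points they should occupy.
environment_data = {
    "theme": "office",
    "models": [
        {
            "model_id": "room-basic-01",
            "content": {
                # content point id -> content item placed at that point
                "cp-1": {"type": "document", "uri": "https://example.com/report.docx"},
                "cp-2": {"type": "3d-object", "uri": "https://example.com/globe.glb"},
            },
        },
        {"model_id": "room-gallery-02", "content": {}},
    ],
}

serialized = json.dumps(environment_data)  # the file distributed to other devices
restored = json.loads(serialized)          # a viewer parses it before rendering
```

Because the file only references models by identifier, the viewer can keep the heavy model resources (geometry, textures, light maps) locally or fetch them separately, as the description discusses below.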
For example, a 3D environment can be a virtual space, such as a virtual reality (VR) world, or it can be a real-world space in which content can be displayed or layered on top of the real world, among other VR or augmented reality (AR) techniques.

[0018] A 3D environment created in accordance with aspects disclosed in this document can then be consumed using a display screen application on a computing device, such as a desktop computer or a smartphone. In one example, a 3D environment can be experienced across a wide spectrum of devices, ranging from low-end devices (for example, GOOGLE CARDBOARD) to high-end devices (for example, MICROSOFT HOLOLENS, OCULUS RIFT, HTC VIVE, etc.). Since the same 3D environment can be generated using desktop computing devices as well as mobile devices, additional overhead (for example, transmitting all required textures, light maps, audio files, etc.) may not be required for the generation of the 3D environment. In addition, platform- or device-specific idiosyncrasies can be handled by the display screen application, in order to make such idiosyncrasies invisible to both the end user and the author of the 3D environment.

[0019] A 3D environment can comprise one or more models, in which a model can comprise a virtual room, a virtual scene, or any other subpart of a virtual world. As described above, a user can use an authoring application to select, organize, and/or customize one or more models to create a 3D environment. The 3D environment can then be stored as an environment data file, where the environment data file can store information related to the one or more models and/or content to be included in the 3D environment. A display screen application can be used to render the 3D environment based on the environment data file.
The display screen application can comprise computing resources associated with the models used by the environment data file, so that the environment data file does not need to comprise those resources. In some examples, the environment data file may comprise computing resources to be used when rendering the 3D environment, or the resources may be retrieved from a server or other remote location, among other examples.

[0020] When rendering the 3D environment, the display screen application can identify one or more anchor points within a model, which can be used when stitching the connected or adjacent models specified by the environment data file together into a 3D environment. As an example, a model can comprise an entry anchor point and an exit anchor point, where the entry anchor point can indicate a door or other entrance to the model and the exit anchor point can indicate a door or other exit from the model. Thus, when stitching multiple models together (for example, adjacent or connected models), the exit anchor point of a first model can be used to position the entry anchor point of a second model (and, by extension, the second model), so that a continuous combination of models is created. In some examples, an anchor point can specify a direction, in which the direction of an entry anchor point can point into the model, while the direction of an exit anchor point can point away from the model.

[0021] In certain aspects, a content point can dictate where content can be positioned (for example, as a content item) within a 3D environment. In examples, a content point or anchor point can define a rendering position within a model.
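The anchor-point stitching just described can be sketched in code. This is a deliberate simplification, assuming 2-D coordinates and translation-only placement (a real renderer would also align the anchor directions by rotating the model); the function and variable names are illustrative.

```python
# Position the next model so that its entry anchor lands exactly on the
# previous model's exit anchor, as described for operation of the viewer.
def stitch_offset(exit_anchor_world, entry_anchor_local):
    """Return the world-space origin for the next model, given the previous
    model's exit anchor (world space) and the next model's entry anchor
    (the model's own local space)."""
    ex, ey = exit_anchor_world
    nx, ny = entry_anchor_local
    return (ex - nx, ey - ny)

# Model A's exit door sits at world position (10, 0); model B's entry door
# sits at (0, 2) in model B's local space.
origin_b = stitch_offset((10, 0), (0, 2))
# Placing model B at origin_b makes the two doors coincide:
entry_world = (origin_b[0] + 0, origin_b[1] + 2)
```

Under this scheme, a continuous chain of rooms falls out of repeatedly applying the same offset computation, one seam at a time.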
In aspects, one or more anchor points can be included as part of a model (for example, as a null point object using a basic naming convention), which can be used by a display screen application or other renderer to locate the anchor points and add children (for example, other models) to them. Similarly, content points can be used to position content within the model. In this way, a 3D environment having content positioned at different anchor points and/or content points can be created without requiring information about the 3D environment prior to rendering. In some examples, a user who creates a 3D environment may be able to place anchor points and/or content points within a 3D environment without the need to write code. In other examples, a user may be able to add, move, or delete content points and/or anchor points of a model.

[0022] In some examples, a set of models can be generated, in which different room types can be predefined as part of the set. The model set can be designed in such a way that stitching a model together with another model of the same set forms a seemingly continuous model. In other examples, aspects of a model can be generated dynamically or programmatically. In one example, a model may indicate that certain aspects can be replaced depending on the model with which it is stitched. As an example, a first model may indicate that a wall or arch can be replaced by a door, such that the entry point of a second model can be stitched to the first model at the door. It will be appreciated that other substitution or model generation techniques can be used without departing from the spirit of this description.

[0023] FIG. 1 illustrates an overview of an example system 100 for creating and generating a three-dimensional environment. As illustrated, system 100 comprises computing devices 102 and 104, and 3D environment service 106.
In one example, computing devices 102 and 104 can be any of a variety of computing devices, including, but not limited to, a mobile computing device, a laptop computing device, a tablet computing device, or a desktop computing device. In some examples, the 3D environment service 106 may be provided as part of a productivity, communication, or collaboration platform. It will be appreciated that while the 3D environment service 106 and elements 108-114 are illustrated as separate from computing devices 102 and/or 104, one or more of elements 108-114 can be provided by computing devices 102 and/or 104 in other examples. As an example, computing device 102 can comprise the authoring application 108, while computing device 104 can comprise the display screen application 110.

[0024] The 3D environment service 106 comprises the authoring application 108, the display screen application 110, the model data store 112, and the created environments data store 114. The authoring application 108 can be used to create a 3D environment according to aspects disclosed in this document. In one example, the authoring application 108 can display 2D representations of one or more 3D models, which can be selected, positioned, and/or customized by a user in order to create a 3D environment. A model can comprise one or more content points, which can be used by the user to position content within the 3D environment. In some examples, the authoring application 108 can provide a variety of themes, where models can be associated with one or more themes, or can be changed or adapted based on a theme selected by the user (for example, colors, textures, lighting, etc., can be modified). In examples, a model can be used for multiple themes, in which at least some of the geometric aspects of the model (for example, the layout, architectural or geographic features, etc.) can be unchanged, while the aesthetics of the model can vary (for example, the color scheme, lighting, audio, etc.).

[0025] The authoring application 108 can output a created 3D environment as an environment data file, in which the environment data file comprises information associated with the selected models (for example, a model identifier, a model name, a model type, etc.), positioning information (for example, coordinates, anchor point identifiers, etc.), content information (for example, which content should be displayed at one or more content points, the content to be displayed, a reference to the content, etc.), and customization information (for example, custom textures, sounds, etc.), among other information. In some examples, the authoring application 108 can be a web-based application, where a user's computing device can access the authoring application 108 using a web browser. In other examples, the authoring application 108 can be an executable application, which can be retrieved and run by the user's computing device.

[0026] The display screen application 110 can be used to generate, view, explore, and/or interact with a 3D environment based on an environment data file. In one example, the display screen application 110 can be a web-based application, where a user's computing device can access the display screen application 110 using a web browser. In other examples, the display screen application 110 can be an executable application, which can be retrieved and run by a user's computing device. According to aspects disclosed in this document, the display screen application can evaluate an environment data file to identify one or more models of a 3D environment. If an environment data file references a plurality of models, the models can be stitched together when rendering the 3D environment.
The display screen application 110 can populate the rendered 3D environment with content at the various content points of the one or more models, based on the content specified by the environment data file. In one example, the display screen application 110 can use any of a variety of 3D rendering engines and can handle engine- and/or device-specific implementation details when rendering the 3D environment, so that the author of the environment data file does not need to be familiar with the specific idiosyncrasies of the engine and/or the device.

[0027] The model data store 112 can store one or more models that can be used to create and/or generate a 3D environment. In one example, the models stored by the model data store 112 can be associated with one or more themes, such that a user of the authoring application 108 can select a theme and be presented with models that are associated with the selected theme. In some examples, a model set can be stored by the model data store 112, where different room types can be predefined as part of the set. The model set can be designed in such a way that stitching a model together with another model of the same set forms a seemingly continuous model. In other examples, aspects of a model stored in the model data store 112 can be generated dynamically or programmatically. In one example, a model can indicate that certain aspects can be replaced depending on the model with which it is stitched. As an example, a first model may indicate that a wall or arch can be replaced by a door, such that an entry point of a second model can be stitched to the first model at the door. It will be appreciated that other substitution or model generation techniques can be used without departing from the spirit of this description.

[0028] The created environments data store 114 can store one or more environment data files.
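The content-population step performed by the viewer can be sketched as a simple lookup from a model's content points into the content table carried by the environment data file. This is a hedged illustration, not the patent's implementation; all names are hypothetical.

```python
# Match a model's content points against the content items the environment
# data file assigns to them; unassigned points are simply left empty.
def populate(model_content_points, content_by_point):
    """Return (content_point_id, content_item) placements for the points
    that the environment data file actually assigns content to."""
    placements = []
    for point_id in model_content_points:
        item = content_by_point.get(point_id)
        if item is not None:
            placements.append((point_id, item))
    return placements

placements = populate(
    ["cp-1", "cp-2", "cp-3"],  # content points defined by the model
    {"cp-1": {"type": "video", "uri": "https://example.com/intro.mp4"}},
)
```

Keeping the mapping in the data file, rather than in the model, is what lets one model be reused across many authored environments with different content.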
In some examples, an “environment data file” as used in this document can be a file in a file system, an entry in a database, or can be stored using any of a variety of other data storage techniques. A 3D environment created by the authoring application 108 can be stored in the created environments data store 114. In one example, where the authoring application 108 is a locally executed application, at least part of the environment data file can be received from one of computing devices 102 and 104, and stored in the created environments data store. In some examples, the display screen application 110 can retrieve an environment data file from the created environments data store 114, which, in conjunction with one or more models from the model data store 112, can be used to generate a 3D environment. In an example where the display screen application is a locally executed application, a model data store can be stored locally and/or remotely with respect to the device running the application, and at least part of an environment data file can be retrieved from the created environments data store 114. In some examples, the data file can be streamed or retrieved in chunks, in order to reduce bandwidth consumption and/or improve responsiveness. It will be appreciated that other data storage and/or retrieval techniques can be used without departing from the spirit of this description.

[0029] Applications 116 and 118 of computing devices 102 and 104, respectively, can be any of a variety of applications. In one example, applications 116 and/or 118 can be an authoring application as described above, where a user of computing device 102 and/or 104 can use the application to create a 3D environment described by an environment data file. In some examples, the environment data file can be stored by the created environments data store 114.
In another example, applications 116 and/or 118 can be a display screen application as described above, which can be used by a user of computing device 102 and/or 104 to view, render, and/or explore a 3D environment defined at least in part by an environment data file. In other examples, computing devices 102 and/or 104 can comprise a model data store similar to the model data store 112 and/or a created environments data store similar to the created environments data store 114. In examples, AR and/or VR hardware devices (not shown) can be attached to computing devices 102 and/or 104 and used to view and/or engage with a rendered 3D environment. For example, an AR or VR headset can be used.

[0030] FIG. 2 illustrates an overview of an example method 200 for creating a three-dimensional environment using a two-dimensional representation. In one example, aspects of method 200 can be performed by a computing device (for example, computing devices 102 and/or 104 in FIG. 1), or can be performed by an authoring application (for example, the authoring application 108). Flow starts at operation 202, where an environment template can be selected. The environment template can define a general feel and/or appearance of the 3D environment (for example, lighting, color scheme, textures, sounds, location, etc.). For example, an office template can be selected, which can be used to generate a 3D environment that represents an office; a garden template can be selected to generate a 3D environment that represents an outdoor space; etc.

[0031] After the environment template is selected, flow continues to operation 204, where a model selection can be received. As described in this document, one or more models can be presented to a user when creating a 3D environment.
In some examples, a set of models may be presented, in which a model of the set may have been designed to be stitched together with another model of the set, so that a seemingly continuous model is generated. As an example, models in a set can have similar colors, textures, object scales, themes, etc. In aspects, the 3D environment can comprise one or more different models (for example, rooms, scenes, etc.).

[0032] Flow progresses to operation 206, where a content point can be selected within the selected model. As described above, a model can comprise one or more content points, which can be used to display or provide content at different positions within the model. After a content point is selected, a menu can be generated that displays the different types of content that can be positioned at the selected content point. As an example, a user can select content comprising 3D objects, videos, images, documents, presentations, spreadsheets, collections of objects, and the like. The menu displayed at operation 206 may be operable to receive user input comprising a selection of one or more types of content to be positioned at the selected content point. In some examples, several content points can be selected, either separately or together, such that content can be associated with several content points of the model selected at operation 206.

[0033] In several aspects, a 3D environment can comprise several interconnected models. Flow continues to determination 208, where it is determined whether an additional model should be added to the 3D environment. In one example, the determination can comprise determining whether the user has provided an indication that another model should be added. In addition to receiving a selection of a new model, a positioning of the model relative to one or more existing models can also be received.
In one aspect, a user interface element can be positioned near an anchor point of an existing model. Upon selection of the user interface element, a menu can be displayed that illustrates the types of models that can be connected to the existing model at the anchor point. Selectable models can have individual layouts and, in aspects, can have several different variants (for example, no door, one door, two doors, circular, square, indoor, outdoor, etc.). The menu can be operable to receive a model selection. Upon receiving the selection, a new model can be connected to the existing model at the anchor point. If an additional model is selected, flow branches “YES” and returns to operation 204. Flow can then loop between operations 204 and 208, so as to add as many models to the 3D environment as are desired by the user.

[0034] However, if no additional rooms are added, flow instead branches “NO” to operation 210. At operation 210, an environment data file describing the created 3D environment can be generated. In one example, the environment data file can store information related to the one or more selected models and/or the content selected for the content points of the models. The environment data file can be used by a display screen application to render a 3D environment according to aspects disclosed in this document. In some examples, the environment data file may comprise computing resources to use when rendering the 3D environment, or the resources can be retrieved from a server or other remote location, among other examples.

[0035] Moving to operation 212, the environment data file can be stored. Storing the environment data file can comprise generating one or more output files or an entry in a database, among other storage techniques.
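The authoring loop of method 200 (select a template, then repeat model selection and content assignment until done, then generate the file) can be sketched as follows. The schema and every name here are hypothetical, chosen only to mirror the operation 202–212 flow.

```python
import json

def author_environment(template, selections):
    """Sketch of method 200. `template` corresponds to operation 202;
    `selections` is an iterable of (model_id, {content_point: content})
    pairs, mirroring the operation 204 -> 206 -> 208 loop."""
    environment = {"template": template, "models": []}
    for model_id, content in selections:  # loop while the user keeps adding models
        environment["models"].append({"model_id": model_id, "content": content})
    return json.dumps(environment)        # operation 210: generate the data file

data_file = author_environment(
    "office",
    [("room-lobby", {"cp-1": {"type": "image", "uri": "logo.png"}}),
     ("room-gallery", {})],
)
```

Operation 212 would then persist the returned string to a file system entry or database row, per the storage options listed above.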
In some examples, the environment data file can be provided to a created environments data store for access by other users, such as the created environments data store 114 in FIG. 1. Flow ends at operation 212. In this way, the method allows a user who does not have technical 3D experience to design and create a 3D environment.

[0036] FIG. 3 illustrates an overview of an example method 300 for generating a three-dimensional environment. In one example, aspects of method 300 can be performed by a computing device (for example, computing devices 102 and/or 104 in FIG. 1), or can be performed by a display screen application (for example, the display screen application 110). Flow starts at operation 302, where an environment data file can be retrieved. The environment data file can be retrieved from a remote or local data store. In some examples, the environment data file can be retrieved from a created environments data store, such as the created environments data store 114 of FIG. 1. In some examples, only a portion of the environment data file may be retrieved initially, while subsequent parts can be retrieved either on demand or based on available computing resources, among other examples.

[0037] Flow progresses to operation 304, where a model can be identified in the retrieved environment data file. In one example, the model can be specified by a model identifier, a model name, etc. In another example, the model may be associated with other information, including, but not limited to, a number of entries or exits, or a theme. In some examples, the model can be selected from the environment data file based on the proximity of the model to the user's position in the 3D environment, or based on an expected time to acquire the resources required to render the model, among other criteria.

[0038] At operation 306, a 3D representation of the model can be rendered in the 3D environment.
Rendering the model can comprise accessing resources associated with the model. In one example, resources can be stored locally or remotely, or a combination thereof. In some examples, a third-party rendering engine can be used to render the environment. In some examples, a model can be adjusted or modified before or during rendering. As an example, entries and/or exits can be dynamically updated according to aspects disclosed in this document. In another example, the colors, lighting, or textures of a model can be changed. It will be appreciated that any of a variety of rendering techniques can be used without departing from the spirit of this description.

[0039] Moving to operation 308, the content points of the model can be populated with content, as indicated by the environment data file. As an example, a 2D representation of the content can be generated for a document, a web page, or other two-dimensional content. In another example, a 3D object can be rendered as floating in the model or positioned on a pedestal, among other presentations. At least some of the content can be stored in the environment data file, can be stored locally or at another location on the device, or can be retrieved from a remote location.

[0040] At determination 310, it can be determined whether the environment data file contains another model. In some examples, the determination can additionally comprise evaluating the available computing resources, in which case flow can pause at determination 310 such that computing resources can be dedicated to rendering other parts of the 3D environment, among other operations. If it is determined that the environment data file does not contain another model, flow branches “NO” to operation 316, where the rendered 3D environment can be presented to the user.
In some examples, an at least partially rendered environment can be presented to the user earlier in method 300. In some examples, the user can be initially positioned in a welcome room or at a predefined location in the 3D environment. Flow ends at operation 316.

[0041] If, however, it is determined at determination 310 that the environment data file comprises an additional model, flow instead branches “YES” to operation 312, where the next model can be identified from the environment data file. In one example, the model can be specified by a model identifier, a model name, etc. In another example, the model may be associated with other information, including, but not limited to, a number of entries or exits, or a theme. In some examples, the model can be selected from the environment data file based on the proximity of the model to the user's position in the 3D environment, or based on an expected time to acquire the resources required to render the model, among other criteria.

[0042] Flow progresses to operation 314, where the newly identified model can be stitched together with the previous model. In one example, anchor points of both models can be identified and used to determine the location at which the next model should be rendered. For example, an exit anchor point of the previous model can be identified and used to determine a location for an entry anchor point of the new model. In other examples, a model can be adjusted (for example, replacing a wall with a door, refining textures, changing the scale, etc.). It will be appreciated that other operations can be performed to stitch the two models together without departing from the spirit of this description. Flow then moves to operation 306, where the new model can be rendered in the 3D environment according to the stitching determinations from operation 314.
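The identify-stitch-render loop of method 300 can be condensed into a small sketch. It assumes, purely for illustration, that every model's entry anchor sits at its local origin and its exit anchor sits at (width, 0), so each seam reduces to advancing a cursor; all names and the coordinate convention are assumptions, not the patent's.

```python
# Walk the models listed in an environment data file, placing each new
# model where the previous model's exit anchor left off.
def build_environment(models):
    """models: dicts with 'model_id' and 'width' (local entry anchor at
    (0, 0), local exit anchor at (width, 0)). Returns a mapping of
    model_id -> world-space origin."""
    origins = {}
    cursor = (0.0, 0.0)                       # where the next entry anchor must land
    for model in models:                      # operations 304/312: identify each model
        origins[model["model_id"]] = cursor   # operation 314: stitch at the seam
        cursor = (cursor[0] + model["width"], cursor[1])  # exit becomes the next seam
    return origins

origins = build_environment([
    {"model_id": "room-a", "width": 10.0},
    {"model_id": "room-b", "width": 6.0},
    {"model_id": "room-c", "width": 8.0},
])
```

A full viewer would interleave rendering (operation 306) and content population (operation 308) inside the same loop, and would rotate models whose anchors do not share an axis.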
Flow then continues through operations 308 and 310, based on the newly identified model. Eventually, no additional models will remain to be rendered, and flow will end at operation 316, as discussed above.

[0043] FIG. 4 illustrates an overview of an example user interface 400 for creating a three-dimensional environment using two-dimensional models. The example 3D environment shown in user interface 400 comprises three different models, which, in the present example, are rooms: rooms 402, 404, and 406. In one example, star 426 indicates a starting position for a user's perspective when the 3D environment is initially rendered. In some examples, the starting position can be moved using user interface 400, while in other examples a “welcome” room type can specify a user's starting position in the 3D environment.

[0044] As shown, rooms 402, 404, and 406 comprise one or more content points, such as content point 416. Content point 416 is illustrated as a check mark, thereby indicating that content is associated with content point 416. By contrast, content point 408 is illustrated as a dark plus sign (as compared to the gray, unassociated content points) to indicate that content point 408 is currently selected. In response to the selection, menu 410, which displays the different types of available content, can be displayed. It will be appreciated that while menu 410 is illustrated as providing three content options, any of a variety of content can be selected, as discussed in further detail above. Upon receiving a selection of one of the available content types via menu 410, the selected content can be positioned at content point 408.

[0045] Anchor points 418, 420, 422, and 424 can indicate the anchor points of rooms 402, 404, and 406.
While the anchor points may not be visible when the 3D environment is finally rendered, user interface 400 can display entry anchor points 418 and 424 alongside exit anchor points 418 and 422 in order to illustrate the flow of the 3D environment and provide an indication of how rooms 402, 404 and 406 fit together. [0046] User interface 400 may also include one or more user interface elements that facilitate the addition of a new model at a connection point of an existing model. For example, user interface element 412 may be operable to receive a selection to add a new room. Upon receipt of a selection at user interface element 412, a room menu 414 can be displayed. Room menu 414 can display one or more different types of rooms that can be connected at connection points. Upon receipt of a room selection in room menu 414, a new room can be added to the displayed 2D representation of the 3D environment. While examples of rooms and models are discussed in this document, it will be appreciated that any of a variety of model and/or room types can be used without departing from the spirit of this description. [0047] FIG. 5 illustrates an example view 500 within a three-dimensional environment. In one example, view 500 can be a view generated based on an environment data file according to the aspects disclosed in this document. View 500 can be a user's perspective within a model (for example, room 402 in FIG. 4) of a 3D environment, such that an author of the 3D environment may have specified that content 502 and 504 should be presented to the user at the content spots of the model. [0048] FIGS. 6-9 and the associated descriptions provide a discussion of a variety of operating environments in which aspects of the description can be practiced. However, the devices and systems illustrated and discussed with reference to FIGS.
6-9 are by way of example and illustration and are not limiting of the vast number of computing device configurations that can be used to practice the aspects of the description described in this document. [0049] FIG. 6 is a block diagram illustrating physical components (e.g., hardware) of a computing device 600 with which aspects of the description can be practiced. The computing device components described below may be suitable for the computing devices described above, including computing devices 102 and 104 and 3D environment service 106. In a basic configuration, computing device 600 can include at least one processing unit 602 and a system memory 604. Depending on the configuration and type of computing device, system memory 604 may comprise, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read-only memory), flash memory, or any combination of such memories. [0050] System memory 604 may include an operating system 605 and one or more program modules 606 suitable for running software application 620, such as one or more components supported by the systems described in this document. As examples, system memory 604 can store an authoring application 624 and an environment data store 626. Operating system 605, for example, may be suitable for controlling the operation of computing device 600. [0051] In addition, embodiments of the description can be practiced in conjunction with graphics libraries, other operating systems, or any other application program, and are not limited to any particular system or application. This basic configuration is illustrated in FIG. 6 by those components within a dashed line 608. Computing device 600 may have additional features or functionality. For example, computing device 600 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
Such additional storage is illustrated in FIG. 6 by a removable storage device 609 and a non-removable storage device 610. [0052] As stated above, a number of program modules and data files can be stored in system memory 604. While running on processing unit 602, program modules 606 (e.g., application 620) can run processes that include, but are not limited to, the aspects described in this document. Other program modules that can be used in accordance with aspects of this description may include e-mail and contact applications, word processing applications, spreadsheet applications, database applications, slide show applications, drawing or computer-aided design application programs, etc. [0053] In addition, embodiments of the description can be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit using a microprocessor, or a single chip containing electronic elements or microprocessors. For example, embodiments of the description can be practiced by means of a system-on-a-chip (SOC) where one or more of the components illustrated in FIG. 6 can be integrated onto a single integrated circuit. Such a SOC device can include one or more processing units, graphics units, communication units, system virtualization units, and various application functionalities, all of which are integrated (or "burned") onto the chip substrate as a single integrated circuit. When operating through a SOC, the functionality described in this document with respect to the client's ability to switch protocols can be operated through application-specific logic integrated with the other components of computing device 600 on the single integrated circuit (chip).
Embodiments of the description can also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the description can be practiced with a general purpose computer or any other circuits or systems. [0054] Computing device 600 may also have one or more input device(s) 612 such as a keyboard, a mouse, a pen, a sound or voice input device, a touch input device, etc. Output device(s) 614 such as a display screen, speakers, a printer, etc., can also be included. The devices mentioned above are exemplary and others can be used. Computing device 600 may include one or more communication connections 616 allowing communications with other computing devices 650. Examples of suitable communication connections 616 include, but are not limited to, radio frequency (RF) transmitter, receiver, and/or transceiver circuitry; universal serial bus (USB), serial, and/or parallel ports. [0055] The term computer-readable medium as used in this document may include a computer storage medium. The computer storage medium may include a removable or non-removable, volatile or non-volatile medium, implemented in any method or technology for storing information, such as computer-readable instructions, data structures, or program modules. System memory 604, removable storage device 609, and non-removable storage device 610 are all examples of computer storage media (e.g., memory storage). Computer storage media can include RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture that can be used to store information and that can be accessed by computing device 600. Any such computer storage media may be part of computing device 600. The computer storage medium does not include a carrier wave or other modulated or propagated data signal. [0056] Communication media may be embodied by computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and include any information delivery media. The term "modulated data signal" can describe a signal that has one or more characteristics set or changed in such a way as to encode information in the signal. As an example, and not by way of limitation, communication media may include wired media such as a wired network or a direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media. [0057] FIGS. 7A and 7B illustrate a mobile computing device 700, for example, a mobile phone, a smartphone, a wearable computer (such as a smart watch), a tablet computer, a laptop computer, and the like, with which embodiments of the description can be practiced. In some aspects, the client may be a mobile computing device. With reference to FIG. 7A, one aspect of a mobile computing device 700 for implementing the aspects is illustrated. In a basic configuration, the mobile computing device 700 is a handheld computer having both input and output elements. The mobile computing device 700 typically includes a display screen 705 and one or more input buttons 710 that allow the user to enter information into the mobile computing device 700.
The display screen 705 of the mobile computing device 700 can also function as an input device (for example, a touch screen display). [0058] If included, an optional side input element 715 allows further user input. The side input element 715 can be a rotary switch, a button, or any other type of manual input element. In alternative aspects, the mobile computing device 700 can incorporate more or fewer input elements. For example, the display screen 705 may not be a touch screen in some embodiments. [0059] In yet another alternative embodiment, the mobile computing device 700 is a portable telephone system, such as a cell phone. The mobile computing device 700 can also include an optional keypad 735. The optional keypad 735 can be a physical keypad or a "soft" keypad generated on the touch screen display. [0060] In various embodiments, the output elements include the display screen 705 for displaying a graphical user interface (GUI), a visual indicator 720 (for example, a light-emitting diode), and/or an audio transducer 725 (for example, a speaker). In some aspects, the mobile computing device 700 incorporates a vibration transducer to provide the user with tactile feedback. In yet another aspect, the mobile computing device 700 incorporates input and/or output ports, such as an audio input (for example, a microphone jack), an audio output (for example, a headphone jack), and a video output (for example, an HDMI port) to send signals to or receive signals from an external device. [0061] FIG. 7B is a block diagram illustrating the architecture of one aspect of a mobile computing device. That is, the mobile computing device 700 can incorporate a system (for example, an architecture) 702 to implement some aspects.
In one embodiment, the system 702 is implemented as a "smartphone" capable of running one or more applications (for example, a browser, e-mail, calendar, contact management, messaging clients, games, and media players/clients). In some aspects, the system 702 is integrated as a computing device, such as an integrated personal digital assistant (PDA) and cordless phone. [0062] One or more application programs 766 can be loaded into memory 762 and run on or in association with the operating system 764. Examples of application programs include phone dialer programs, e-mail programs, personal information management (PIM) programs, word processing programs, spreadsheet programs, Internet browser programs, messaging programs, and so on. The system 702 also includes a non-volatile storage area 768 within memory 762. The non-volatile storage area 768 can be used to store persistent information that should not be lost if the system 702 is turned off. Application programs 766 can use and store information in the non-volatile storage area 768, such as e-mail or other messages used by an e-mail application, and the like. A synchronization application (not shown) also resides on system 702 and is programmed to interact with a corresponding synchronization application residing on a host computer to keep the information stored in the non-volatile storage area 768 synchronized with the corresponding information stored on the host computer. As should be appreciated, other applications can be loaded into memory 762 and run on the mobile computing device 700 described in this document (for example, search engines, extractor modules, relevance ranking modules, response scoring modules, etc.). [0063] System 702 has a power source 770, which can be implemented as one or more batteries.
The power source 770 may additionally include an external power source, such as an AC adapter or a powered charging base that supplements or recharges the batteries. [0064] System 702 may also include a radio interface layer 772 that performs the function of transmitting and receiving radio frequency communications. The radio interface layer 772 facilitates wireless connectivity between the system 702 and the "outside world", via a communications carrier or a service provider. Transmissions to and from the radio interface layer 772 are conducted under the control of the operating system 764. In other words, communications received by the radio interface layer 772 can be disseminated to the application programs 766 through the operating system 764, and vice versa. [0065] The visual indicator 720 can be used to provide visual notifications, and/or an audio interface 774 can be used to produce audible notifications via the audio transducer 725. In the illustrated embodiment, the visual indicator 720 is a light-emitting diode (LED) and the audio transducer 725 is a speaker. These devices can be coupled directly to the power source 770 so that, when activated, they remain on for a duration dictated by the notification mechanism even though the processor 760 and other components might shut down to conserve battery power. The LED can be programmed to remain on indefinitely until the user takes action to indicate the powered-on state of the device. The audio interface 774 is used to provide audible signals to, and receive audible signals from, the user. For example, in addition to being coupled to the audio transducer 725, the audio interface 774 can also be coupled to a microphone to receive audible input, for example to facilitate a telephone conversation.
In accordance with embodiments of the present description, the microphone can also serve as an audio sensor to facilitate the control of notifications, as will be described below. The system 702 can additionally include a video interface 776 that enables operation of an on-board camera 730 to record still images, video streams, and the like. [0066] A mobile computing device 700 implementing the system 702 may have additional features or functionality. For example, the mobile computing device 700 may also include additional data storage devices (removable and/or non-removable) such as magnetic disks, optical disks, or tapes. Such additional storage is illustrated in FIG. 7B by the non-volatile storage area 768. [0067] Data/information generated or captured by the mobile computing device 700 and stored via the system 702 can be stored locally on the mobile computing device 700, as described above, or the data can be stored on any number of storage media that can be accessed by the device through the radio interface layer 772 or through a wired connection between the mobile computing device 700 and a separate computing device associated with the mobile computing device 700, for example, a server computer in a distributed computing network, such as the Internet. As is to be appreciated, such data/information can be accessed via the mobile computing device 700 through the radio interface layer 772 or through a distributed computing network. Similarly, such data/information can be readily transferred between computing devices for storage and use in accordance with well-known data/information storage and transfer means, including electronic mail and collaborative data/information sharing systems. [0068] FIG.
8 illustrates one aspect of a system architecture for processing data received at a computing system from a remote source, such as a personal computer 804, a tablet computing device 806, or a mobile computing device 808, as described above. Content displayed on the server device 802 can be stored on different communication channels or other types of storage. For example, several documents can be stored using a directory service 822, a web portal 824, a mailbox service 826, an instant messaging store 828, or a social networking site 830. [0069] An environment display screen application 820 can be employed by a client communicating with the server device 802, and/or the environment data store 821 can be employed by the server device 802. The server device 802 can provide data to and from a client computing device such as a personal computer 804, a tablet computing device 806, and/or a mobile computing device 808 (for example, a smartphone) over the network 815. For example, the computer system described above can be incorporated into a personal computer 804, a tablet computing device 806, and/or a mobile computing device 808 (for example, a smartphone). Any of these embodiments of the computing device can obtain content from storage 816, in addition to receiving graphical data usable to be either processed at a graphics-originating system or post-processed at a receiving computing system. [0070] FIG. 9 illustrates an example tablet computing device 900 that can perform one or more aspects disclosed in this document. In addition, the aspects and functionalities described in this document can operate on distributed systems (for example, cloud-based computing systems), where application functionality, memory, data storage and retrieval, and various processing functions can be operated remotely from each other over a distributed computing network, such as the Internet or an intranet.
User interfaces and information of various types can be displayed via on-board computing device display screens or via remote display units associated with one or more computing devices. For example, user interfaces and information of various types can be displayed on, and interacted with on, a wall surface onto which user interfaces and information of various types are projected. Interaction with the multitude of computing systems with which embodiments of the invention can be practiced includes keystroke entry, touch screen entry, voice or other audio entry, and gesture entry where an associated computing device is equipped with detection functionality (for example, a camera) to capture and interpret user gestures for controlling the functionality of the computing device, and the like. [0071] As will be understood from the foregoing description, one aspect of the technology relates to a system comprising: at least one processor, and a memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations.
The set of operations comprises: retrieving an environment data file, wherein the environment data file comprises a plurality of models for a three-dimensional (3D) environment; generating, in the 3D environment, a first 3D representation of a first model of the plurality of models, wherein the first model is associated with an exit anchor point; determining whether the environment data file indicates that a second model of the plurality of models is adjacent to the first model, wherein the second model is associated with an entry anchor point; and based on the determination that the second model is adjacent to the first model, generating a second 3D representation of the second model, wherein the representation of the second model is positioned in the 3D environment in such a way that the entry anchor point associated with the second model is positioned in proximity to the exit anchor point of the first model. In one example, generating the first 3D representation of the first model comprises: determining that the environment data file specifies a content item associated with a content spot in the first model; and generating a representation of the content item at the content spot of the first model in the first 3D representation. In another example, generating the representation of the content item comprises generating a request for a remote resource associated with the content item. In an additional example, the set of operations further comprises: presenting the 3D representation on a display screen of a user's device. In yet another example, the first model indicates an initial position for a user's perspective, and presenting the 3D representation comprises presenting the 3D representation from the initial position for the user's perspective.
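As one possible concretization of the environment data file referred to in this summary, the sketch below serializes an environment pattern, an initial position, model identifiers with adjacency, and content spots to JSON. The schema and key names are illustrative assumptions: the description does not prescribe any particular file format.

```python
# Hypothetical serialization of an environment data file.
# The JSON schema below is an assumption for illustration only.
import json

environment = {
    "environment_pattern": "office",       # user-selected environment pattern
    "start": {"model": "room-402"},        # initial position for the user
    "models": [
        {"id": "room-402", "adjacent": "room-404",  # stitched via anchor points
         "content_spots": [{"spot": 416, "content": "https://example.com/video"}]},
        {"id": "room-404", "adjacent": "room-406", "content_spots": []},
        {"id": "room-406", "adjacent": None, "content_spots": []},
    ],
}

data_file = json.dumps(environment)        # distributable to other devices

# A display screen application could walk the adjacency chain when rendering:
loaded = json.loads(data_file)
order = [m["id"] for m in loaded["models"]]
```

Because the file carries only identifiers, adjacency, and content associations, the display screen application remains free to fetch the actual model geometry and content items (for example, from a remote data store) at render time.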
In a further example, retrieving the environment data file comprises requesting at least a portion of the environment data file from a remote data store. In another example, the first model and the second model comprise a set of models having a similar theme. [0072] In another aspect, the technology relates to a method for generating an environment data file that represents a three-dimensional (3D) environment. The method comprises: receiving a user selection of an environment pattern for the 3D environment; receiving a user selection of a first model, wherein the first model is associated with the selected environment pattern; presenting a two-dimensional (2D) display screen of the first model, wherein the 2D display screen comprises a display of one or more content spots of the first model; receiving a user selection of a content spot from the one or more content spots indicating content to be displayed at the selected content spot; and generating the environment data file, wherein the environment data file comprises information about the selected environment pattern, the first model, and the selected content spot, wherein the selected content spot is associated with the indicated content. In one example, the method further comprises: presenting a display of one or more models associated with the selected environment pattern, wherein the one or more models are displayed using 2D representations. In another example, the method further comprises: receiving a selection of a starting position for a user's perspective, wherein the selection is located within the first model; and storing the received selection as part of the environment data file. In an additional example, the method further comprises: receiving a selection of a second model, wherein the selection comprises an indication that the second model is positioned adjacent to the first model.
In yet another example, the environment data file comprises an identifier associated with the first model and an identifier associated with the second model. In yet a further example, the method further comprises: storing the generated environment data file in a remote data store for access by one or more user devices. [0073] In an additional aspect, the technology relates to a method for generating a three-dimensional (3D) environment using an environment data file. The method comprises: retrieving the environment data file, wherein the environment data file comprises a plurality of models for the 3D environment; generating, in the 3D environment, a first 3D representation of a first model of the plurality of models, wherein the first model is associated with an exit anchor point; determining whether the environment data file indicates that a second model of the plurality of models is adjacent to the first model, wherein the second model is associated with an entry anchor point; and based on the determination that the second model is adjacent to the first model, generating a second 3D representation of the second model, wherein the representation of the second model is positioned in the 3D environment in such a way that the entry anchor point associated with the second model is positioned in proximity to the exit anchor point of the first model. In one example, generating the first 3D representation of the first model comprises: determining whether the environment data file specifies a content item associated with a content spot of the first model; and generating a representation of the content item at the content spot of the first model in the first 3D representation. In another example, generating the representation of the content item comprises generating a request for a remote resource associated with the content item.
In an additional example, the method further comprises: presenting the 3D representation on a display screen of a user's device. In yet another example, the first model indicates a starting position for a user's perspective, and presenting the 3D representation comprises presenting the 3D representation from the starting position for the user's perspective. In a further example, retrieving the environment data file comprises requesting at least a portion of the environment data file from a remote data store. In another example, the first model and the second model comprise a set of models having a similar theme. [0074] Aspects of the present description, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products in accordance with aspects of the description. The functions/acts noted in the blocks can occur out of the order shown in any flowchart. For example, two blocks shown in succession can, in fact, be executed substantially at the same time, or the blocks can sometimes be executed in the reverse order, depending on the functionality/acts involved. [0075] The description and illustration of one or more aspects provided in this application are not intended to limit or restrict the scope of the description as claimed in any way. The aspects, examples, and details provided in this application are considered sufficient to convey possession and enable others to make and use the best mode of the claimed description. The claimed description should not be construed as being limited to any aspect, example, or detail provided in this application. Whether shown or described in combination or separately, the various features (both structural and methodological) are intended to be selectively included or omitted to produce an embodiment with a particular set of features.
Having been provided with the description and illustration of the present application, one skilled in the art may envision variations, modifications, and alternative aspects that fall within the spirit of the broader aspects of the general inventive concept embodied in this application and that do not depart from the broader scope of the claimed description.
Claims (15) [1] 1. System, characterized by the fact that it comprises: at least one processor; and a memory that stores instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising: retrieving an environment data file, wherein the environment data file comprises a plurality of models for a three-dimensional (3D) environment; generating, in the 3D environment, a first 3D representation of a first model of the plurality of models, wherein the first model is associated with an exit anchor point; determining that the environment data file indicates that a second model of the plurality of models is adjacent to the first model, wherein the second model is associated with an entry anchor point; and based on the determination that the second model is adjacent to the first model, generating a second 3D representation of the second model, wherein the representation of the second model is positioned in the 3D environment in such a way that the entry anchor point associated with the second model is positioned in proximity to the exit anchor point of the first model. [2] 2. System, according to claim 1, characterized by the fact that the generation of the first 3D representation of the first model comprises: determining that the environment data file specifies a content item associated with a content spot of the first model; and generating a representation of the content item at the content spot of the first model in the first 3D representation. [3] 3. System, according to claim 1, characterized by the fact that the set of operations further comprises: presenting the 3D representation on a display screen of a user device. [4] 4.
Computer-implemented method for generating an environment data file that represents a three-dimensional (3D) environment, characterized by the fact that it comprises: receiving a user selection of an environment pattern for a 3D environment; receiving a user selection of a first model, wherein the first model is associated with the selected environment pattern; presenting a two-dimensional (2D) display screen of the first model, wherein the 2D display screen comprises a display of one or more content spots of the first model; receiving a user selection of a content spot from the one or more content spots that indicates content to be displayed at the selected content spot; and generating an environment data file, wherein the environment data file comprises information about the selected environment pattern, the first model, and the selected content spot, wherein the selected content spot is associated with the indicated content. [5] 5. Computer-implemented method, according to claim 4, characterized by the fact that it further comprises presenting a display of one or more models associated with the selected environment pattern, wherein the one or more models are displayed using 2D representations. [6] 6. Computer-implemented method, according to claim 4, characterized by the fact that it further comprises: receiving a selection of a second model, wherein the selection comprises an indication that the second model is positioned adjacent to the first model. [7] 7.
Computer-implemented method for generating a three-dimensional (3D) environment using an environment data file, characterized by the fact that it comprises: retrieving the environment data file, wherein the environment data file comprises a plurality of models for the 3D environment; generating, in the 3D environment, a first 3D representation of a first model of the plurality of models, wherein the first model is associated with an exit anchor point; determining that the environment data file indicates that a second model of the plurality of models is adjacent to the first model, wherein the second model is associated with an entry anchor point; and based on the determination that the second model is adjacent to the first model, generating a second 3D representation of the second model, wherein the representation of the second model is positioned in the 3D environment in such a way that the entry anchor point associated with the second model is positioned in proximity to the exit anchor point of the first model. [8] 8. Method, according to claim 7, characterized by the fact that it further comprises: presenting the 3D representation on a display screen of a user device. [9] 9. Method, according to claim 8, characterized by the fact that the first model indicates an initial position for the user's perspective, and that the presentation of the 3D representation comprises the presentation of the 3D representation from the initial position in the user's perspective. [10] 10. Method, according to claim 7, characterized by the fact that retrieving the environment data file comprises requesting at least a part of the environment data file from a remote data store. [11] 11.
System, according to claim 3, characterized by the fact that the first model indicates an initial position in the user's perspective, and in which the presentation of the 3D representation comprises the presentation of the 3D representation from the initial position in the perspective of the user. user. [12] 12. System, according to claim 1, characterized by the fact that the first model and the second model comprise a set of models having a similar theme. [13] 13. Method implemented by computer, according to claim 4, characterized by the fact that it still comprises: receive a selection from a starting position from a user's perspective, where the selection is located within the first model; and store the received selection as part of the environment data file. [14] 14. Method implemented by computer, according to claim 6, characterized by the fact that the environment data file comprises an identifier associated with the first model and an identifier associated with the second model. [15] 15. Method, according to claim 7, characterized by the fact that the generation of the first 3D representation of the first model comprises: Petition 870190093906, of 9/19/2019, p. 53/69 5/5 determine that the environment data file specifies a content item associated with a content spot of the first model; and generate a representation of the content item at the content point of the first model in the first 3D representation.
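The authoring method of claims 4 and 6 can be illustrated with a small sketch. This is a hypothetical implementation, not the patented format: the template catalog, the model identifiers, the content-point names, and the JSON layout of the environment data file are all assumptions made for illustration.

```python
import json

# Hypothetical template catalog; the names and structure are illustrative only.
TEMPLATES = {
    "gallery": {
        "models": {
            "room-a": {"content_points": ["wall-1", "wall-2", "pedestal-1"]},
            "room-b": {"content_points": ["wall-1"]},
        }
    }
}

def create_environment_file(template_id, model_id, content_point, content):
    """Serialize the user's selections into an environment data file."""
    model = TEMPLATES[template_id]["models"][model_id]
    if content_point not in model["content_points"]:
        raise ValueError(f"unknown content point: {content_point}")
    environment = {
        "template": template_id,
        # Each model entry carries an identifier and the content assigned
        # to its selected content points (cf. claims 4 and 14).
        "models": [{"id": model_id, "content": {content_point: content}}],
    }
    return json.dumps(environment)

data_file = create_environment_file(
    "gallery", "room-a", "wall-1",
    {"type": "image", "url": "https://example.com/a.png"},
)
```

Claim 6's adjacency selection would add a second entry to the `models` list together with an adjacency indication, and the identifiers of claim 14 correspond to the `id` fields above.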
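The stitching step of claim 7 can likewise be sketched: when the environment data file marks a second model as adjacent to the first, the second model is placed so that its entry anchor point lands on the first model's exit anchor point. The model geometry and anchor offsets below are assumptions, and real models would carry orientation as well as position.

```python
# Anchor points are offsets from each model's local origin (assumed values).
MODEL_LIBRARY = {
    "room": {"entry": (0.0, 0.0, 0.0), "exit": (10.0, 0.0, 0.0)},
    "hall": {"entry": (0.0, 0.0, 0.0), "exit": (6.0, 0.0, 0.0)},
}

def stitch(model_ids):
    """Return a world-space origin for each model, chained exit-to-entry."""
    origins = []
    cursor = (0.0, 0.0, 0.0)  # world position where the next entry anchor goes
    for model_id in model_ids:
        anchors = MODEL_LIBRARY[model_id]
        # Place the model so its entry anchor coincides with the cursor.
        origin = tuple(c - e for c, e in zip(cursor, anchors["entry"]))
        origins.append(origin)
        # Advance the cursor to this model's exit anchor in world space.
        cursor = tuple(o + x for o, x in zip(origin, anchors["exit"]))
    return origins

positions = stitch(["room", "hall", "room"])
```

Each successive model is rendered at the returned origin, which realizes the "in proximity to" relation between the second model's entry anchor and the first model's exit anchor.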
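Finally, the viewer side of claims 9, 13 and 15 can be sketched as a loader that reads the environment data file, starts the user's perspective at the stored initial position within the first model, and collects the content items to instantiate at their content points. The file layout matches the authoring sketch above and is an assumption, not the patented format.

```python
import json

def load_scene(environment_json):
    """Resolve the start position and content items from an environment file."""
    data = json.loads(environment_json)
    first_model = data["models"][0]
    scene = {
        # Claim 13: the stored starting position, defaulting to the first model.
        "start": data.get("start",
                          {"model": first_model["id"], "position": [0, 0, 0]}),
        "items": [],
    }
    for model in data["models"]:
        # Claim 15: content items bound to content points of each model.
        for point, item in model.get("content", {}).items():
            scene["items"].append(
                {"model": model["id"], "point": point, "item": item})
    return scene

env = json.dumps({
    "template": "gallery",
    "start": {"model": "room-a", "position": [1, 0, 2]},
    "models": [{"id": "room-a", "content": {"wall-1": {"type": "video"}}}],
})
scene = load_scene(env)
```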
Similar technologies:
Publication number | Publication date | Patent title
BR112019019556A2 | 2020-04-22 | Creation and generation of three-dimensional environment
US9792665B2 | 2017-10-17 | Real time visual feedback during move, resize and/or rotate actions in an electronic document
US11189098B2 | 2021-11-30 | 3D object camera customization system
KR20170019242A | 2017-02-21 | Method and apparatus for providing user interface in an electronic device
US9727989B2 | 2017-08-08 | Modifying and formatting a chart using pictorially provided chart elements
US9164673B2 | 2015-10-20 | Location-dependent drag and drop UI
US20150277726A1 | 2015-10-01 | Sliding surface
AU2015241256A1 | 2016-10-06 | Command user interface for displaying and scaling selectable controls and commands
KR102274474B1 | 2021-07-06 | Inset dynamic content preview pane
EP2936350A1 | 2015-10-28 | Navigating content hierarchies and persisting content item collections
WO2015153524A1 | 2015-10-08 | Transient user interface elements
US11164395B2 | 2021-11-02 | Structure switching in a three-dimensional environment
WO2017058679A1 | 2017-04-06 | Font typeface preview
TW201519067A | 2015-05-16 | Creating visualizations from data in electronic documents
US9383885B2 | 2016-07-05 | Hit testing curve-based shapes using polygons
US11048376B2 | 2021-06-29 | Text editing system for 3D environment
US10304225B2 | 2019-05-28 | Chart-type agnostic scene graph for defining a chart
US20170358125A1 | 2017-12-14 | Reconfiguring a document for spatial context
WO2014200848A1 | 2014-12-18 | Persistent reverse navigation mechanism
Patent family:
Publication number | Publication date
US11138809B2 | 2021-10-05
EP3616032A1 | 2020-03-04
CA3056953A1 | 2018-11-01
CL2019002951A1 | 2020-03-13
WO2018200199A1 | 2018-11-01
CO2019011966A2 | 2020-01-17
CN110573997A | 2019-12-13
ZA201905873B | 2020-12-23
PH12019550188A1 | 2020-06-08
MX2019012624A | 2020-01-30
CN110573224A | 2019-12-13
US20200013236A1 | 2020-01-09
JP2020518071A | 2020-06-18
JP2020518077A | 2020-06-18
BR112019022129A2 | 2020-05-05
KR20190139902A | 2019-12-18
WO2018200200A1 | 2018-11-01
RU2019137605A | 2021-05-25
US20180308274A1 | 2018-10-25
MX2019012626A | 2020-01-30
RU2765341C2 | 2022-01-28
ZA201905870B | 2020-11-25
AU2018260575A1 | 2019-09-19
KR20190141162A | 2019-12-23
US10388077B2 | 2019-08-20
RU2019137607A | 2021-05-25
US10453273B2 | 2019-10-22
EP3615155A1 | 2020-03-04
PH12019550189A1 | 2020-06-29
CA3056956A1 | 2018-11-01
WO2018200201A1 | 2018-11-01
US20180308289A1 | 2018-10-25
EP3616043A1 | 2020-03-04
CN110832450A | 2020-02-21
RU2019137607A3 | 2021-07-12
CL2019002950A1 | 2020-03-13
SG11201909455UA | 2019-11-28
AU2018257944A1 | 2019-09-19
US20180308290A1 | 2018-10-25
SG11201909454QA | 2019-11-28
CN110573997B | 2021-12-03
CO2019011870A2 | 2020-01-17
Legal status:
2021-10-19 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
Application number | Filing date | Patent title
US201762489904P | 2017-04-25 | 2017-04-25
US15/636,125 | US10388077B2 | 2017-04-25 | 2017-06-28 | Three-dimensional environment authoring and generation
PCT/US2018/026994 | WO2018200199A1 | 2017-04-25 | 2018-04-11 | Three-dimensional environment authoring and generation